Neural Network Learning: Testing Bounds on Sample Complexity

Authors

  • Joaquim Marques de Sá
  • Fernando Sereno
  • Luís A. Alexandre
Abstract

Several authors have theoretically derived distribution-free bounds on sample complexity, and formulas based on several learning paradigms have been presented. However, little is known about how these formulas perform and compare with each other in practice. To our knowledge, controlled experimental results using these formulas and comparing their behavior have not been presented so far. The present paper contributes to filling this gap by providing controlled experimental results on how simple perceptrons, trained by gradient descent or by the support vector approach, comply with these bounds in practice.
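To make the experimental setting concrete, the sketch below trains a perceptron by gradient descent on a sample whose size is prescribed by one classical distribution-free bound, the Blumer et al. (1989) sufficient sample size, and then measures test error. This is an illustration, not the authors' code: the learning rate, epoch count, squared-error criterion, and data distribution are all arbitrary assumptions.

    import numpy as np

    def behw_sample_size(vc_dim, eps, delta):
        # Classical distribution-free sufficient sample size
        # (Blumer et al., 1989):
        #   m >= max((4/eps) log2(2/delta), (8 d / eps) log2(13/eps))
        return int(np.ceil(max(4 / eps * np.log2(2 / delta),
                               8 * vc_dim / eps * np.log2(13 / eps))))

    def train_perceptron_gd(X, y, lr=0.5, epochs=500):
        # One linear unit with a sigmoid output, batch gradient descent
        # on the squared error (one simple training choice; the paper's
        # exact setup may differ).
        Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            out = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * Xb.T @ ((out - y) * out * (1.0 - out)) / len(X)
        return w

    d, eps, delta = 2, 0.1, 0.05
    m = behw_sample_size(d + 1, eps, delta)  # VC dim of a perceptron in R^d is d+1
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])           # hypothetical target halfspace
    X = rng.normal(size=(m, d)); y = (X @ w_true > 0).astype(float)
    w = train_perceptron_gd(X, y)
    Xt = rng.normal(size=(20000, d)); yt = (Xt @ w_true > 0).astype(float)
    err = np.mean((np.hstack([Xt, np.ones((len(Xt), 1))]) @ w > 0) != yt)
    print(f"bound-prescribed m = {m}; test error = {err:.4f} (target eps = {eps})")

The point of the comparison is the one raised in the abstract: the bound prescribes a sample size sufficient for error at most eps with probability 1 - delta, and the experiment checks how conservative that prescription is for a concrete training procedure.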


Similar Articles

Size-Independent Sample Complexity of Neural Networks

We study the sample complexity of learning neural networks by providing new bounds on their Rademacher complexity, assuming norm constraints on the parameter matrix of each layer. Compared to previous work, these complexity bounds have improved dependence on the network depth, and under some additional assumptions, are fully independent of the network size (both depth and width). These results ...
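For orientation, the standard route from a Rademacher complexity estimate to a generalization bound (and hence a sample-size requirement) is the textbook inequality below; this is background, not the paper's refined size-independent result:

\[
\mathcal{R}_m(\mathcal{F}) = \mathbb{E}_{S,\sigma}\Big[\sup_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i f(x_i)\Big],
\qquad \sigma_i \sim \mathrm{Uniform}\{-1,+1\},
\]

and, with probability at least $1-\delta$ over an i.i.d. sample of size $m$, simultaneously for all $f \in \mathcal{F}$ with loss values in $[0,1]$,

\[
L(f) \le \hat{L}(f) + 2\,\mathcal{R}_m(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2m}}.
\]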


Covering Number Bounds of Certain Regularized Linear Function Classes

Recently, sample complexity bounds have been derived for problems involving linear functions such as neural networks and support vector machines. In many of these theoretical studies, the concept of covering numbers played an important role. It is thus useful to study covering numbers for linear function classes. In this paper, we investigate two closely related methods to derive upper bounds o...
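As background for the snippet above, the covering number of a function class is defined as

\[
\mathcal{N}(\epsilon, \mathcal{F}, \|\cdot\|) = \min\Big\{ N : \exists\, f_1,\dots,f_N \ \text{s.t.}\ \sup_{f \in \mathcal{F}} \min_{j} \|f - f_j\| \le \epsilon \Big\},
\]

and for the 2-norm-constrained linear class $\{x \mapsto w^\top x : \|w\|_2 \le a\}$ over inputs with $\|x\|_2 \le b$ in $\mathbb{R}^d$, Maurey-type arguments in this line of work yield bounds of the shape

\[
\log_2 \mathcal{N}(\epsilon) \,\le\, \Big\lceil \frac{a^2 b^2}{\epsilon^2} \Big\rceil \log_2(2d+1),
\]

where the second display is quoted as indicative of the form of such results rather than as this paper's exact statement.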


Sample Complexity for Learning Recurrent Perceptron Mappings

Recurrent perceptron classifiers generalize the usual perceptron model. They correspond to linear transformations of input vectors obtained by means of "autoregressive moving-average schemes", or infinite impulse response filters, and allow taking into account those correlations and dependencies among input coordinates which arise from linear digital filtering. This paper provides tight bounds on...
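To make the model concrete, here is a minimal sketch of one plausible reading of it: a linear IIR (ARMA) filter applied to the input sequence, with a sign threshold on the final filter output. The function name and the classify-on-last-output convention are illustrative assumptions, not the paper's definition.

    import numpy as np

    def recurrent_perceptron(u, a, b):
        # y[t] = sum_j a[j] * y[t-1-j] + sum_k b[k] * u[t-k]
        # i.e. an IIR / ARMA filter; classify by the sign of the
        # final filter output (illustrative convention).
        y = np.zeros(len(u))
        for t in range(len(u)):
            ar = sum(a[j] * y[t - 1 - j] for j in range(len(a)) if t - 1 - j >= 0)
            ma = sum(b[k] * u[t - k] for k in range(len(b)) if t - k >= 0)
            y[t] = ar + ma
        return 1 if y[-1] >= 0 else -1

    print(recurrent_perceptron(np.array([0.5, -1.0, 0.3, 0.8]),
                               a=[0.5], b=[1.0, -0.2]))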


Analysis of Regularized Linear Functions for Classification Problems

Recently, sample complexity bounds have been derived for problems involving linear functions such as neural networks and support vector machines. In this paper, we extend some theoretical results in this area by providing convergence analysis for regularized linear functions with an emphasis on classification problems. The class of methods we study in this paper generalizes support vector machine...
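The family of estimators this snippet refers to can be summarized by the standard regularized empirical risk objective below, included here for context; choosing the hinge loss recovers the soft-margin SVM as a special case:

\[
\hat{w} = \arg\min_{w} \ \frac{1}{n} \sum_{i=1}^{n} \phi\big(y_i\, w^\top x_i\big) + \lambda \|w\|_2^2,
\qquad \phi(z) = \max(0,\, 1 - z) \ \Rightarrow\ \text{soft-margin SVM}.
\]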


Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods

Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for training a two-layer neural network. We prove efficient risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimensio...
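A core primitive in tensor-decomposition training methods of this kind is the symmetric tensor power iteration. The toy numpy sketch below (an illustration of the primitive, not the paper's guaranteed algorithm) recovers the top component of a symmetric 3-way tensor.

    import numpy as np

    def tensor_power_iteration(T, n_iters=100, seed=0):
        # Recover the top component of a symmetric 3-way tensor by
        # repeated tensor-vector contractions: v <- T(I, v, v) / ||.||
        rng = np.random.default_rng(seed)
        v = rng.normal(size=T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            v = np.einsum('ijk,j,k->i', T, v, v)
            v /= np.linalg.norm(v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # associated eigenvalue
        return lam, v

    # Toy rank-1 example: T = 2 * (e1 x e1 x e1) in R^3
    e1 = np.array([1.0, 0.0, 0.0])
    T = 2.0 * np.einsum('i,j,k->ijk', e1, e1, e1)
    print(tensor_power_iteration(T))   # approximately (2.0, e1)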



Publication date: 2004